Replication Crashes on iPad with 500 MB Data: OPFS JSON Parse Error After Crash (RxDB 16.9.0) #7074
Comments
We write the JSON of documents in blocks in a way where either the full JSON is stored or nothing.
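For illustration only, here is a minimal TypeScript sketch of that all-or-nothing idea using the OPFS `createWritable()` API, which stages writes in a swap file and only commits them on `close()`. This is not RxDB's actual implementation (the OPFS storage runs inside a worker and may use `createSyncAccessHandle()` instead), and the file name is a placeholder:

```ts
// Minimal sketch: writes through a FileSystemWritableFileStream are staged in a
// temporary swap file and only replace the real file when close() succeeds, so a
// crash mid-write leaves the previous file content intact.
async function writeJsonBlockAtomically(fileName: string, docs: unknown[]): Promise<void> {
  const root = await navigator.storage.getDirectory();
  const handle = await root.getFileHandle(fileName, { create: true });

  const writable = await handle.createWritable({ keepExistingData: false });
  await writable.write(JSON.stringify(docs));
  await writable.close(); // commit point: either the full JSON lands, or none of it
}
```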
It was just a hypothesis, so I’m not entirely sure.
It’s possible to simulate less RAM by running the browser in a Docker container where you can define how much RAM it can use.
@pubkey I was able to reproduce it in a Chromium browser on Ubuntu. I also get this error sometimes. Since swapping IndexedDB for OPFS (+ sharding), the app has been unpredictable on the iPad; we didn't have these errors with IndexedDB, but we had to switch because IndexedDB couldn't handle the size of the data we replicate.
I also feel like the app is using a lot more RAM with OPFS than with IndexedDB.
For context, after more testing: I get the error in the second screenshot first, and after reloading the page I get the error in the first screenshot.
Does a restart fix the memory usage so it goes back to normal, or will it start up again with the same amount of memory usage? Do you have any indication whether the memory goes up on reads or on writes?
For context, the screenshot was taken in dev mode, not on users' devices, but even on the iPad we are seeing app crashes due to high memory usage, so it restarts until memory is saturated again. I will share our experience below.
Aside from the initial replication, where most of the writing happens, our app is mostly read-heavy with some minimal writes from time to time. Just to share the broader experience of our PWA users in production for context: after switching, we ran into this memory issue. The frustrating part is that the migration made things worse for users who were previously fine, and it still doesn’t fix things for the ones we were targeting. We’re now testing a rollback to IndexedDB, but this time with sharding + key compression; these are the results of our initial tests (setup sketched below). We’d love your input on whether this approach makes sense, and whether there’s anything else we could try to stabilize it further. Thanks again!
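For reference, this is roughly the setup we are testing. The premium import paths and the sharding wrapper are assumptions based on the RxDB docs and may differ by version; the database name is a placeholder:

```ts
import { createRxDatabase } from 'rxdb';
import { wrappedKeyCompressionStorage } from 'rxdb/plugins/key-compression';
// Premium plugins; exact import paths may differ by RxDB version.
import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb';
import { getRxStorageSharding } from 'rxdb-premium/plugins/storage-sharding';

// IndexedDB storage, wrapped with sharding and key compression.
const storage = wrappedKeyCompressionStorage({
  storage: getRxStorageSharding({
    storage: getRxStorageIndexedDB()
  })
});

const db = await createRxDatabase({
  name: 'appdb', // placeholder
  storage
});

// Note: key compression also requires `keyCompression: true` in each collection
// schema, and the sharding plugin reads its shard settings from the schema as well.
```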
Can you reproduce the memory leak without dev-mode?
When we test the app on an iPad (outside of dev mode), we can clearly see that it consumes significantly more RAM when using OPFS storage. Using tools like Device Monitor, RAM usage stays at around 98% consistently. It could be a memory leak, but it might also be that OPFS storage is just very memory-intensive. Debugging this is tricky though, as we only have access to the minified code and not the version with source maps.
Do you have any handy tools to track memory usage?
I only use Chrome DevTools with the memory profiler. Can you post your schema? Or send it to me via Discord if it contains private information. The question is whether the memory is really leaking with OPFS, or whether you store data that just has a high memory footprint because of how OPFS stores index-to-document mappings in memory.
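Besides DevTools snapshots, a small in-page helper can log memory periodically while replication runs, which helps tell whether usage climbs on reads or on writes. A sketch using Chromium-only APIs (`performance.measureUserAgentSpecificMemory()` needs a cross-origin-isolated page, and `performance.memory` is non-standard):

```ts
// Log approximate memory usage every 10 seconds (Chromium only).
async function logMemory(label: string): Promise<void> {
  const perf = performance as any;
  if (typeof perf.measureUserAgentSpecificMemory === 'function' && crossOriginIsolated) {
    const result = await perf.measureUserAgentSpecificMemory();
    console.log(label, Math.round(result.bytes / 1024 / 1024), 'MB (total)');
  } else if (perf.memory) {
    // Older, non-standard fallback: JS heap only.
    console.log(label, Math.round(perf.memory.usedJSHeapSize / 1024 / 1024), 'MB (JS heap)');
  }
}

setInterval(() => logMemory('memory during replication'), 10_000);
```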
@pubkey I have sent you the schema on Discord.
Your schema has […] Your schema does not have any indexes. This further indicates that there is indeed a memory leak in the OPFS storage itself.
The length will not exceed 20; we will change that.
We don't use indexes for this collection because we query all the data at once.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed soon. Please update it, or it may be closed to keep our repository organized. The best way is to add some more information or make a pull request with a test case. You might also get help fixing it in the RxDB Community Chat. If you know you will continue working on this, just write any message to the issue (like "ping") to remove the stale tag.
I’m encountering an issue when replicating a large dataset (approximately 500 MB) on an iPad. During replication, the device’s memory eventually becomes saturated, and the app crashes and restarts. After restarting, replication does not resume properly; instead, I receive the following error:
Steps to Reproduce:
1. Replicate a collection of roughly 500 MB on an iPad (about 1 GB with all the collections combined).
2. Allow replication to run until the iPad memory is saturated.
3. Once the memory is exhausted, the app crashes and restarts.
4. On restart, replication does not resume, and instead, the worker displays the error mentioned above.
Hypothesis:
My suspicion is that due to the crash, the app did not manage to complete writing all the data into the OPFS storage. As a result, only part of the data gets stored, leading to a corrupted JSON file (with, for example, a missing closing bracket). When the replication resumes and attempts to read from this file, it runs into a parsing error because of the corrupted data.
Questions / Requests for Assistance:
1. Is it possible that an unexpected crash during replication would leave the OPFS storage in a corrupted state?
2. Are there recommended practices or changes in RxDB that could help prevent this issue?
3. Is there a way to safely reset the database, or to detect and recover from such corruption when an app crash occurs? (A rough sketch of one possible approach follows this list.)
4. Any suggestions on handling large replication tasks in resource-constrained environments like an iPad to avoid such crashes?
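To make question 3 concrete, here is a hedged sketch of one possible recovery path: if opening the database fails with a JSON parse error, wipe the local data with `removeRxDatabase()` and let replication rebuild it from the server. The premium OPFS import path and the error check are assumptions on my side, not a confirmed RxDB recommendation:

```ts
import { createRxDatabase, removeRxDatabase } from 'rxdb';
// Premium OPFS storage; the import path may differ by RxDB version.
import { getRxStorageOPFS } from 'rxdb-premium/plugins/storage-opfs';

const storage = getRxStorageOPFS();

async function openDatabaseWithRecovery(name: string) {
  try {
    return await createRxDatabase({ name, storage });
  } catch (err: any) {
    // Heuristic: adapt this check to the exact parse error seen in the worker.
    if (String(err?.message ?? err).includes('JSON')) {
      await removeRxDatabase(name, storage); // drop corrupted local data
      return await createRxDatabase({ name, storage }); // rebuild via replication
    }
    throw err;
  }
}
```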
Additional Notes:
• Since I cannot directly access the OPFS storage on the iPad to inspect the file contents, any guidance on remote debugging or logging would be helpful.
I appreciate any help or suggestions on how to resolve this issue. Thank you!